
    Video dataset of human demonstrations of folding clothing for robotic folding

    General-purpose clothes-folding robots do not yet exist owing to the deformable nature of textiles, which makes it hard to engineer manipulation pipelines or to learn this task. To accelerate research on learning robotic clothes folding, we introduce a video dataset of human folding demonstrations. In total, we provide 8.5 hours of demonstrations recorded from multiple perspectives, yielding 1,000 folding samples of different types of textiles. The demonstrations were recorded in multiple public places, under different conditions, with a diverse set of people. Our dataset consists of anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels. In this article, we describe our recording setup, the data format, and utility scripts, which can be accessed at https://adverley.github.io/folding-demonstrations
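    The per-sample structure described in the abstract (RGB images, depth frames, skeleton keypoints, and an object label) could be represented roughly as follows. This is a hedged sketch only: the class, field names, and the 17-joint skeleton are illustrative assumptions, not the dataset's actual schema.

    ```python
    # Minimal sketch of one demonstration sample; all names are illustrative.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class FoldingSample:
        rgb_frames: List[List[List[Tuple[int, int, int]]]]  # per frame: H x W RGB pixels
        depth_frames: List[List[List[float]]]               # per frame: H x W depth values
        keypoints: List[List[Tuple[float, float]]]          # per frame: skeleton joints (x, y)
        label: str                                          # textile type, e.g. "towel"

        def __len__(self) -> int:
            # Number of frames in the demonstration.
            return len(self.rgb_frames)

    def make_dummy_sample(n_frames: int = 2, h: int = 4, w: int = 4) -> FoldingSample:
        """Build a tiny synthetic sample to show the expected shapes."""
        rgb = [[[(0, 0, 0)] * w for _ in range(h)] for _ in range(n_frames)]
        depth = [[[0.0] * w for _ in range(h)] for _ in range(n_frames)]
        kps = [[(0.0, 0.0)] * 17 for _ in range(n_frames)]  # 17-joint skeleton assumed
        return FoldingSample(rgb, depth, kps, "towel")

    sample = make_dummy_sample()
    ```

    The actual field layout and loading utilities are defined by the dataset's own scripts at the URL above.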

    Learning self-supervised task progression metrics: a case of cloth folding

    An important challenge for smart manufacturing systems is finding relevant metrics that capture task quality and progression for process monitoring to ensure process reliability and safety. Data-driven process metrics construct features and labels from abundant raw process data, which incurs costs and inaccuracies due to the labelling process. In this work, we circumvent expensive process data labelling by distilling the task intent from video demonstrations. We present a method to express the task intent in the form of a scalar value by aligning a self-supervised learned embedding to a small set of high-quality task demonstrations. We evaluate our method on the challenging case of monitoring the progress of people folding clothing. We demonstrate that our approach effectively learns to represent task progression without manually labelling sub-steps or progress in the videos. Using case-based experiments, we find that our method learns task-relevant features and useful invariances, making it robust to noise, distractors and variations in the task and shirts. The experimental results show that the proposed method can monitor processes in domains where state representation is inherently challenging.
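    The core idea of the abstract, reducing a frame to a scalar progress value by aligning its embedding against a reference demonstration, can be sketched as below. The learned self-supervised embedding itself is out of scope here; the toy 1-D "embeddings" and the nearest-neighbour alignment are simplifying assumptions, not the paper's exact method.

    ```python
    # Hedged sketch: progress = normalised index of the nearest embedding
    # in one high-quality reference demonstration's trajectory.
    from math import dist
    from typing import List, Sequence

    def progress(query: Sequence[float],
                 reference: List[Sequence[float]]) -> float:
        """Return a scalar in [0, 1]: position of the reference embedding
        closest to the query, normalised by trajectory length."""
        nearest = min(range(len(reference)),
                      key=lambda i: dist(query, reference[i]))
        return nearest / (len(reference) - 1)

    # Toy 1-D embedding trajectory for a 5-step reference demonstration.
    ref = [[0.0], [0.25], [0.5], [0.75], [1.0]]
    ```

    With this sketch, a query frame embedded near the end of the reference trajectory maps to a value close to 1.0, giving the scalar monitoring signal the abstract describes.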

    A comparative analysis on genome pleiotropy for evolved soft robots

    Biological evolution shapes the body and brain of living creatures together over time. By contrast, in evolutionary robotics, the co-optimization of these subsystems remains challenging. Conflicting mutations cause dissociation between morphology and control, which leads to premature convergence. Recent works have proposed algorithmic modifications to mitigate the impact of conflicting mutations. However, the importance of genetic design remains underexposed. Current approaches are divided between a single, pleiotropic genetic encoding and two isolated encodings representing morphology and control. This design choice is commonly made ad hoc, causing a lack of consistency for practitioners. To standardize this design, we performed a comparative analysis between these two configurations on a soft robot locomotion task. Additionally, we incorporated two currently unexplored alternatives that drive these configurations to their logical extremes. Our results demonstrate that pleiotropic representations yield superior performance in fitness and robustness against premature convergence. Moreover, we showcase the importance of shared structure in the pleiotropic representation of robot morphology and control to achieve this performance gain. These findings provide valuable insights into genetic encoding design, which supply practitioners with a theoretical foundation to pursue efficient brain-body co-optimization.
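    The contrast between the two genetic designs the abstract compares can be sketched with a deliberately tiny genome: in the pleiotropic design one shared genome (and one mutation) influences both morphology and control, while in the isolated design each subsystem has its own independently mutated genome. All names, the genome split, and the Gaussian mutation operator are illustrative assumptions, not the paper's implementation.

    ```python
    # Hedged sketch of pleiotropic vs. isolated genetic encodings.
    import random

    def mutate(genome: list, rng: random.Random, sigma: float = 0.1) -> list:
        """Additive Gaussian mutation on a real-valued genome."""
        return [g + rng.gauss(0.0, sigma) for g in genome]

    def offspring_pleiotropic(genome: list, rng: random.Random):
        """One shared genome: a single mutation shifts body and brain together."""
        child = mutate(genome, rng)
        half = len(child) // 2
        morphology, control = child[:half], child[half:]
        return child, morphology, control

    def offspring_isolated(body: list, brain: list, rng: random.Random):
        """Two genomes: morphology and control mutate independently, the
        setting the abstract links to conflicting mutations."""
        return mutate(body, rng), mutate(brain, rng)

    rng = random.Random(0)
    child, morph, ctrl = offspring_pleiotropic([0.0] * 4, rng)
    body2, brain2 = offspring_isolated([0.0] * 2, [0.0] * 2, rng)
    ```

    The abstract's finding is that the shared-structure (pleiotropic) variant outperforms the isolated one in fitness and robustness; this sketch only illustrates where the designs differ, not that result.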